logo

Acoustic Telemetry Data Analysis Workshop


Introductions

Your tutor for this OCS workshop…


My name is Ross Dwyer and I am a Senior Lecturer and leader of the Dwyer Movement Ecology Lab at the University of the Sunshine Coast.

I’ve been working with animal tracking data since the mid 2000s, and I’ve been lucky enough to work with a broad range of species - from shorebirds to crocodiles to whale sharks. These opportunities have largely come about because of the challenges in visualising, analysing, and interpreting animal tracking datasets, and because I was quick to learn that R is probably the best tool available for working with these complex data streams.

I use acoustic telemetry to gain insights into the movement behaviours of a broad range of shark and ray species. If you are aware of the strengths and weaknesses of this data type, and are able to do some R-wizardry on these often large spatial datasets, there are a lot of interesting and useful applications. My current research programs use acoustic telemetry data to quantify the effectiveness of MPAs, determine environmental preferences and movement drivers, assess overlap with anthropogenic threats, and gain insights into social behaviours.

At UniSC I teach R to undergraduate students, and I have contributed to the development of three software packages designed specifically for the analysis of animal tracking data: ZoaTrack (formerly OzTrack) - a web-based platform for analysing and visualising animal tracking data (Dwyer et al. 2015); VTrack - an R package for analysing and visualising acoustic telemetry data (Campbell et al. 2012; Udyawer et al. 2018); and remora - an R package to facilitate access to, quality control of, and reporting and integration of acoustic telemetry datasets with IMOS data streams (https://imos-animaltracking.github.io/remora/).

We will cover VTrack and remora in this workshop!

Ross’s R code can be found on his GitHub page.

Speartooth shark (Glyphis glyphis)



Course outline

In this course you will learn about different ways to manage, analyse and interpret your aquatic telemetry datasets using Fathom and R.

This workshop will demonstrate how R can make the processing of spatial data much quicker and easier than using standard GIS software!

At the end of this workshop you will also have the annotated R code that you can re-run at any time, share with collaborators, and build on with those newly acquired datasets!

I designed this course not to comprehensively cover all the tools in R, but rather to give you an understanding of the options available for analysing your acoustic telemetry data.

Every new project comes with its own problems and questions and you will need to be independent, patient and creative to solve these challenges. It makes sense to invest time in becoming familiar with R, because today R is the leading platform for environmental data analysis and has some other functionalities which may surprise you!


This R workshop is intended to run across 3 sessions.


  • Session 1: Getting familiar with input data formats for acoustic telemetry
  1. Managing your data using Fathom and the IMOS ATF Australian Animal Acoustic Telemetry Database
  2. Exploring the different export formats of acoustic telemetry data


  • Session 2: Working with remora
  1. Using the remora R package to interactively explore your telemetry data
  2. Undertaking Quality control checks on your raw detection data
  3. Extracting and appending environmental variables to acoustic telemetry data


  • Session 3: Working with VTrack
  1. Using the VTrack R package to generate summary tables of your tags and receivers
  2. Generating animations of your animal tracks in Google Earth
  3. Extracting Residency and Movement files from your tracking data


These sessions will address the fundamentals of data pre-processing and visualisation, utilising the tidyverse, sf, and ggplot2 R packages.


Participants will learn efficient data cleaning methods, organisation of telemetry data, and the creation of geospatial visualisations to gain insights into animal movements.



Course Resources

The course resources will be emailed to you prior to the workshop. However, you can also access the data and scripts we will work through in this course following these steps:


1. Download the course materials from this link.

2. Unzip the downloaded file. It should have the following two folders:

  • code folder
  • data folder

3. Save all the files in a location on your computer you can find again during the workshop


Back to top



Software installation


Processing and analysing large datasets like those from animal telemetry work can require a huge investment in time: rearranging data, removing erroneous values, and purchasing, downloading and learning new software before even running analyses. Furthermore, merging Excel spreadsheets, filtering data and preparing data for statistical analysis and plotting across different software packages can introduce all sorts of errors.

R is a powerful language for data wrangling and analysis because…

  • It is relatively fast to run and process commands
  • You can create repeatable scripts
  • You can trace errors back to their source
  • You can share your scripts with other people
  • It is easy to identify errors in large data sets
  • Having your data in R opens up a huge array of cutting edge analysis tools.
  • R is also totally FREE!


Installing packages

Part of the reason R has become so popular is the vast array of packages that are freely available and highly accessible. In the last few years, the number of packages has grown exponentially, with more than 10,000 now available on CRAN! These can help you to do a galaxy of different things in R, including running complex analyses, drawing beautiful figures, running R as a GIS, constructing your own R packages, building web pages and even writing R course handbooks like this one!

Let’s suppose you want to load the sf package to access this package’s incredible spatial functionality. If this package is not already installed on your machine, you can download it from the web by using the following command in R.

install.packages("sf", repos='http://cran.us.r-project.org')

In this example, sf is the package to be downloaded and ‘http://cran.us.r-project.org’ is the repository where the package will be accessed from.


More recently, package developers have also used other platforms like GitHub to house R packages. This has enabled users to access packages that are actively being updated and enable developers to fix problems and develop new features with user feedback.

The remotes and devtools R packages have enabled the installation of packages directly from platforms like GitHub. For example, if we want to download the remora package from its GitHub repository, we can use the install_github() function like this:

## Install packages
remotes::install_github("IMOS-AnimalTracking/remora", build_vignettes = TRUE)

Installation instructions:

For this course, make sure you have downloaded and installed the most updated versions of the following software:


1. Download R for your relevant operating system from the CRAN website

2. Download RStudio for your relevant operating system from the RStudio website

3. Once you’ve installed the above software, make sure you install the following packages prior to the start of the course

## Packages that are on CRAN
install.packages(c("tidyverse",
                   "sf",
                   "terra",
                   "raster",
                   "lubridate",
                   "leaflet",
                   "remotes",
                   "plotly"
                   ))

## Install packages from GitHub and other external repositories
remotes::install_github("r-spatial/mapview", build_vignettes = TRUE)
remotes::install_github("pbreheny/visreg", build_vignettes = TRUE)
remotes::install_github("RossDwyer/VTrack", build_vignettes = FALSE)

When downloading from GitHub, the R console will sometimes ask you if it should also update other packages. In most cases, you can skip updating other packages (option [3]) as this often stalls the whole process of downloading the package you want. If there are package dependencies, these will often be downloaded automatically (but not always!). If you want to update the other packages you have, you can do it separately.


Back to top



Session 1

Data management tools for acoustic telemetry


Currently there are several data management tools that are available for storing, cleaning, exploring and analysing data obtained using acoustic telemetry in the Oceania region.


VUE

Innovasea’s VUE software has long been the main way to communicate with Vemco/Innovasea receivers, to run receiver diagnostics, and to offload and store data.



However, if you have received acoustic tags that use Innovasea’s newest “Generation 2” tag code spaces such as A69-2801 (pinger) or A69-2951 (sensor), these are not visible in the VUE software. Receiver files recently offloaded with VUE and imported or exported from a VUE database may therefore not display all detection data.

As of 2024, VUE has been discontinued and has been replaced by Innovasea’s Fathom suite of software (i.e., Fathom Connect, Fathom Mobile and Fathom Central).

It has always been important to ensure your acoustic receivers and the VUE software installed on your computer have the most recent updates installed so that they detect any acoustic tags in the local vicinity. As VUE has been discontinued, these updates will likely soon cease to be available on this platform.





Back to top


Fathom

Innovasea’s Fathom software suite (Fathom Connect, Fathom Mobile, and Fathom Central) is the recommended software for preparing, collecting, offloading, analysing and sharing acoustic telemetry data collected on Innovasea receivers.



For more information, check out this video overview of the various Fathom platforms from Innovasea - https://www.youtube.com/watch?v=KW_2mUGdvx8&t=60s



The Fathom Mobile App will work on an Android phone or tablet, and offers the capacity to connect to, initialize, and offload VR2-family receivers, record deployment locations, and keep track of animal tagging metadata.



The Fathom Connect software will work on a Windows computer, and lets you configure, initialize, and offload data from Innovasea receivers.




Fathom Central is a cloud-based storage hub for saving, managing and visualizing your acoustic telemetry data. It provides simple online tools and helps you quickly prepare detection data and metadata for export to telemetry networks.





Back to top





IMOS ATF

In addition to Innovasea’s own software, there are also telemetry networks developed to store and share acoustic telemetry data. The Australian Animal Acoustic Telemetry Database maintained by Integrated Marine Observing System’s Animal Tracking Facility (IMOS ATF) houses acoustic telemetry datasets collected around the Oceania region, and users can store and access acoustic telemetry data through the online database.



Telemetry networks offer several advantages over VUE or Fathom when working with acoustic telemetry data:

  • The tag metadata and receiver metadata can be exported along with the detection data. This means that the station names associated with each receiver deployment are exported in the detection dataset, and dual-sensor tags are allocated to the same tagged individual with a unique tag code.

  • Provided the tags and receivers are BOTH registered in the Australian Animal Acoustic Telemetry Database, you will be notified via email if your tag is detected on anyone else’s receivers.

  • It is possible to check in the Australian Animal Acoustic Telemetry Database whether anyone has receivers deployed in the vicinity of your tagging site (or along your species’ migration routes).

Once you have exported your tag detection data from Fathom Central via the FILES tab, it is very easy to upload these to IMOS ATF’s Australian Animal Acoustic Telemetry Database. The database accepts single files or the .zip file exported from Fathom Central containing the collection of exported .CSV files.







Exploring the different export formats of acoustic telemetry data


Regardless of whether you are working in VUE, Fathom Central or IMOS ATF’s Australian Animal Acoustic Telemetry Database, each data source has its own data export formats, which are not always interchangeable when used with R packages.

In general, acoustic telemetry datasets have at least 3 components that are required for analyses:

  1. Detection data: This includes only the detections (presence) of tagged individuals on specific receivers.
  2. Transmitter metadata: This includes metadata on the tag specifications. Sometimes this also includes metadata for the tagged animal.
  3. Receiver metadata: This includes the coordinates of all the receivers used to monitor tagged animals in the study system.

Here we will go through 3 different formats that acoustic telemetry data can come in, and how each is structured. This is not an exhaustive list, but it includes the main formats currently used by software and expected by R packages.

If you want to have a closer look at these data formats, I have provided 3 example datasets in the Data export formats folder within the data folder you have downloaded.
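You don’t need to open these files in Excel to inspect them; R can list and preview them directly. A minimal sketch, assuming the folder sits at data/Data export formats relative to your working directory:

## List the example .CSV exports provided with the course materials
example_files <- list.files("data/Data export formats",
                            pattern = "\\.csv$", full.names = TRUE,
                            recursive = TRUE, ignore.case = TRUE)
example_files

## Peek at the column names of the first example file
names(read.csv(example_files[1], nrows = 5))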





VUE format

Exporting detection data from VUE provides only a single file, containing just the detection data. The researcher is responsible for keeping the metadata for each receiver deployment within the array and for each tag deployment, which are also needed for a full analysis.
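As a quick illustration, a VUE detection export can be read straight into R. The file name below is hypothetical, and the column names mentioned in the comments are typical of VUE exports - check your own file’s header, as they can vary with export settings.

## Read a VUE detection export (hypothetical file name)
vue_dets <- readr::read_csv("data/Data export formats/VUE_export.csv")

## VUE exports typically include columns such as
## "Date and Time (UTC)", "Receiver", "Transmitter" and "Station Name"
names(vue_dets)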








Fathom format

Fathom Central allows you to export detection data in two different .CSV formats.

The more simplified format can be exported from the DETECTIONS tab in Fathom Central, which exports a .CSV file with only 5 columns:

  • Date Time: Date and time in UTC
  • Full ID: the full tag ID code including the Code Space
  • Serial number: the Receiver serial number
  • Sensor Value: the sensor value of the tag
  • Sensor: the sensor type of the tag (e.g. Temperature, Depth)
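Because this simplified export is a flat table, it is easy to read into R. A minimal sketch, assuming a hypothetical file name and that Date Time is stored as a standard year-month-day hour:minute:second timestamp:

## Read the simplified DETECTIONS export (hypothetical file name)
dets_simple <- readr::read_csv("data/Data export formats/fathom_detections.csv")

## Parse the timestamp as UTC
dets_simple <- dplyr::mutate(dets_simple,
                             `Date Time` = lubridate::ymd_hms(`Date Time`, tz = "UTC"))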





The recommended way to export tag detection data from Fathom Central is via the FILES tab.





This downloads the selected receiver files (.vrl, .vdat) collectively as a .zip file, with a separate .CSV for each receiver. These exported detection files have a more complex format than the old VUE export, weaving multiple datasets together into the same file.

Not only does this export include tag detection information, but it also includes a range of other environmental and diagnostic data stored on the receiver unit. This allows you to export ambient temperature, depth, and ambient noise data from your receivers that could be useful for your research questions.

This information is included in the first 26 rows that define the field names for each data record type (blue block below). The first column in each line of the dataset after this block (orange column below) indicates what data type each row contains.





The range of data types are as follows:

  • DET: detections
  • DIAG: receiver diagnostics
  • DEPTH: ambient depth (sensor) data
  • TEMP: ambient temperature (sensor) data
  • BATTERY: receiver battery health
  • CFG_STATION: receiver/array metadata

If you are using R to analyse your data, this format will require a fair amount of reformatting before it can be used for further analysis.

Researchers must introduce steps in their workflow to (1) allocate the station names associated with each receiver deployment, and (2) manually remove any tag detections that occurred before tag and receiver deployment, or after tag or receiver recovery.
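One way to pull just the detection records out of this mixed-format file in R is to read it line by line, keep only the rows whose first field is DET, and parse those as a CSV. A minimal base-R sketch (the file name is hypothetical, and the exact header layout may differ between receiver firmware versions):

## Read the raw Fathom receiver export line by line
raw <- readLines("data/Data export formats/fathom_receiver_export.csv")

## The first field of each row names its record type (DET, DIAG, TEMP, ...)
rec_type <- sub(",.*", "", raw)

## Keep only the detection records and parse them as CSV
dets <- read.csv(text = raw[rec_type == "DET"], header = FALSE)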







IMOS ATF format

If transmitters have successfully been deployed in the Australian Animal Acoustic Telemetry Database, and receivers have also successfully been deployed and recovered, it is possible to access animal detections (and the associated tag and receiver metadata) via the Australian Animal Acoustic Telemetry Database.



Detection data exported from the Australian Animal Acoustic Telemetry Database has its own format. The database website allows researchers to access and download detection data, tag metadata and receiver metadata for a selected tagging project. These data exports have a large number of columns that provide comprehensive information associated with each detection, tag or receiver. The format of the detection data includes the following 32 column names:



The database webpage also allows users to download complementary receiver metadata that has 15 columns:



As well as tag metadata with 24 columns:



If size and other biological variables were collected for individuals (there can be multiple measures per individual), an additional animal measurements file can also be downloaded:











Back to top





Session 2

Working with remora


For this part of the session, we will go through some of the functionality of the new remora package. This package was created to assist users of the Australian Animal Acoustic Telemetry Database to easily explore and analyse their data. The intention is that data exported and downloaded from the web portal can feed directly into the package to do quick analyses. The package also enables the integration of animal telemetry data with oceanographic observations collected by IMOS and other ocean observing programs. The package includes functions that:

  • Interactively explore animal movements in space and time from acoustic telemetry data
  • Perform robust quality-control of acoustic telemetry data using the method described by Hoenner et al. 2018
  • Identify available remote sensed and sub-surface, in-situ oceanographic datasets that spatially and temporally overlap animal movement data
  • Once identified, the package assists in extracting and appending these variables to the movement data


The package follows this rough workflow to enable project reporting, data quality control and environmental data extraction:


First load the remora package in your workspace.

library(remora)


Next let’s load the other required packages for this session.

library(tidyverse)
library(sf)
library(mapview)
library(ggspatial)

Once the remora package has been loaded, we can explore the functionality of the package using vignettes that describe the different functions.

browseVignettes(package = "remora")


Interactively explore animal movements in space and time

One of the main functions of the remora package is the ability to interactively explore data without further coding in R.

We can use the shinyReport() function to create a report based on your receiver data or transmitter data. Both these reports produce lots of interesting metrics and maps to explore your data in depth.

## Create and explore a receiver array report
shinyReport(type = "receivers")

## Create and explore a transmitter report
shinyReport(type = "transmitters")


For more information on these functions check out the vignette in the remora package

vignette("shinyReport_receivers", package = "remora")
vignette("shinyReport_transmitters", package = "remora")


Undertaking Quality control checks on your raw detection data

We can now use the functionality of remora to conduct quality control checks with our IMOS_pigeye_sample_dataset in the data folder.

For the package to find all the data in the correct place, we will make a list of the locations where each of our files lives on your computer.

files <- list(det = "data/IMOS_pigeye_sample_dataset/IMOS_detections.csv",
              rmeta = "data/IMOS_pigeye_sample_dataset/IMOS_receiver_deployment_metadata.csv",
              tmeta = "data/IMOS_pigeye_sample_dataset/IMOS_transmitter_deployment_metadata.csv",
              meas = "data/IMOS_pigeye_sample_dataset/IMOS_animal_measurements.csv")

files


We can now use the runQC() function to conduct a comprehensive suite of quality control checks

tag_qc <- runQC(x = files, .parallel = TRUE, .progress = TRUE)


After running this code, each detection will have additional columns appended to it. These columns provide the results of each of the 7 quality control checks conducted during this step. The QC algorithm tests 7 aspects of the detection data and grades each test as per below. An overall Detection_QC value is then calculated, ranking each detection as 1: valid; 2: likely valid; 3: unlikely valid; or 4: invalid.


You can now access each component of the results of the QC process using the grabQC() function

## this will only grab the QC flags resulting from the algorithm
grabQC(tag_qc, what = "QCflags")

## this will extract all the relevant data, keeping only those detections that were deemed `valid` and `likely valid`
qc_data <- grabQC(tag_qc, what = "dQC", flag = c("valid", "likely valid"))

qc_data # our good qc'd dataset with likely valid detections

# write_rds(qc_data,"data/IMOS_pigeye_sample_dataset/IMOS_qc_data.RDS")

nqc_data <- grabQC(tag_qc, what = "dQC", flag = c("likely invalid","invalid"))
nqc_data # shows our 2 flagged animals 
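To see how the detections were distributed across the four QC categories, you could tabulate the overall flag across both subsets (here assuming the overall flag column returned by grabQC() is named Detection_QC, as described above):

## Tally detections by overall QC grade
dplyr::bind_rows(qc_data, nqc_data) %>% 
  dplyr::count(Detection_QC)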


We can now visualise the QC detection process, mapping detections and their resulting QC flags

plotQC(tag_qc)


For more information on these functions, check out the vignette in the remora package.

vignette("runQC", package = "remora")


Extracting and appending environmental variables to acoustic telemetry data

We can also use remora to identify environmental data (currently only within Australia) that overlap (spatially and temporally) with your animal telemetry data. The full list of variables you can access and append directly from R can be found using the imos_variables() function.

imos_variables()


In-situ data from moorings in the IMOS National Mooring Network

If sub-surface variables are of interest, the remora package can be used to access, extract and append data from the nearest oceanographic mooring deployed by the IMOS National Mooring Network. This can be done using the extractMoor() function, but before using this, we need to find the moorings that are most relevant.


Let’s use the full example dataset to see which moorings are closest and provide in-situ temperature data. We can access the metadata for all moorings that record temperature data

moor_temp <- mooringTable(sensorType = "temperature")

We can now map the full network


moor_temp %>% 
  st_as_sf(coords = c("longitude", "latitude"), crs = 4326) %>% 
  mapview(popup = paste("Site code", moor_temp$site_code,"<br>",
                        "URL:", moor_temp$url, "<br>",
                        "Standard names:", moor_temp$standard_names, "<br>",
                        "Coverage start:", moor_temp$time_coverage_start, "<br>",
                        "Coverage end:", moor_temp$time_coverage_end),
          col.regions = "red", color = "white", layer.name = "IMOS Mooring")

If you have tagged animals that move down the coast past multiple receiver stations, you can find the closest mooring to each tag detection using remora’s getDistance() function.

det_dist <- getDistance(trackingData = qc_data, 
                        moorLocations = moor_temp,
                        X = "receiver_deployment_longitude",
                        Y = "receiver_deployment_latitude",
                        datetime = "detection_datetime")

mooring_overlap <- getOverlap(det_dist)
mooring_overlap # GBRLSL is the closest mooring with 5073 temperature records


This confirms that GBRLSL is the closest mooring available to us. Note that this mooring is probably too far away from our field site to be accurate, but let’s presume it’s OK for the purposes of this workshop.

Now that we have identified the moorings to extract data from, we can run the mooringDownload() function to download the data from the AODN.

## Download mooring data from closest moorings
moorIDs <- unique(mooring_overlap$moor_site_code)

moor_data <- mooringDownload(moor_site_codes = "GBRLSL",
                             sensorType = "temperature",
                             fromWeb = TRUE,
                             file_loc = "imos.cache/moor/temperature")


We can now visualise the temperature profile data alongside the animal detection data for a temporal subset of the data

## Plot depth time of temperature from one mooring along with the detection data
start_date <- "2023-01-01"
end_date <- "2025-02-01"

plotDT(moorData = moor_data$GBRLSL, 
       moorName = "GBRLSL",
       dateStart = start_date, dateEnd = end_date,
       varName = "temperature",
       trackingData = det_dist,
       speciesID = "Carcharhinus amboinensis",
       IDtype = "species_scientific_name",
       detStart = start_date, detEnd = end_date)


By default, each of the environmental sensors positioned on a mooring will be returned for a given timestamp. For example, if 10 sensors are positioned on a mooring, each hourly timestamp for that mooring will have 10 sensor readings.

data_with_mooring_sst_all <- extractMoor(
  trackingData=det_dist,
  file_loc="imos.cache/moor/temperature",
  sensorType = "temperature",
  timeMaxh = Inf,
  distMaxkm = Inf,
  targetDepthm = NA,
  scalc = c("min", "max", "mean")
)

We can see the extracted environmental variables are appended as new columns to the input dataset as a nested tibble. You can look at the contents of this by using the unnest() function in the tidyr package.

data_with_mooring_sst_all # By default the output is a nested tibble

You can increase the sensitivity of the extractMoor() function by setting a maximum time threshold (in hours) for the period you are willing to allow between tag detection and mooring timestamps (timeMaxh). You can also add a distance threshold (distMaxkm) for maximum allowed distance (in kilometers) between moorings and receiver station locations.

In this example, we will limit our function so that only detections within 24h and 500km of moorings are allocated with sensor value readings.

As each mooring line has multiple sensors deployed at various depths, we can specify which sensor we want returned by setting a targetDepthm. Here we set this as targetDepthm=0 to return the sensor value that is nearest to the water surface.

# Run the same function again with time and distance thresholds
# and when multiple sensors are available return the shallowest value at that time stamp
data_with_mooring_sst_shallow <- extractMoor(trackingData = det_dist,
                                             file_loc="imos.cache/moor/temperature",
                                             sensorType = "temperature",
                                             timeMaxh = 24,
                                             distMaxkm = 500,
                                             targetDepthm=0, 
                                             scalc="min")

data_with_mooring_sst_shallow

Notice that this new merged dataset data_with_mooring_sst_shallow is now much smaller than the original data_with_mooring_sst_all? This is because we have selected only a single sensor value for each detection timestamp. We have also dropped those rows from the dataset that did not meet the time and distance thresholds.

# Unnest tibble to reveal the first 10 rows of the data
data_with_mooring_sst_all %>% 
  tidyr::unnest(cols = c(data))


Now let’s see how the detections of our tagged animals corresponded with variations in sea water temperature. To do this, we will generate an abacus plot of each of the transmitters plotted through time.

We will use the summarise function in the dplyr package to count the number of detections for each tag each day and to take an average of mooring derived temperatures.

We will use the ggplot2 package to plot an abacus (detection plots), with transmitter_id on the y axis, detection_datetime on the x axis and the points coloured by the in-situ temperature and sized according to the number of detections.

summarised_data_id <-
  data_with_mooring_sst_shallow %>% 
  tidyr::unnest(cols = c(data)) %>% 
  mutate(date = as.Date(detection_datetime)) %>% 
  group_by(transmitter_id, date) %>% 
  dplyr::summarise(num_det = n(),
                   mean_temperature = mean(moor_sea_temp, na.rm = T))

library(ggplot2)
ggplot(summarised_data_id, aes(x = date, y = transmitter_id, size = num_det, color = mean_temperature)) +
  geom_point() +
  scale_color_viridis_c() +
  labs(subtitle = "In-situ sea water temperature (°C)") +
  theme_bw()


As with the other remora functions we have quickly covered above, there is far more functionality than we have time to cover here. To learn about more features, including accessing and appending other data at specific depths, check out the function’s vignette

vignette("extractMoor", package = "remora")






Back to top





Session 3

Working with VTrack


The VTrack R package was built in 2012 by researchers at the University of Queensland to facilitate the assimilation, analysis and synthesis of animal location data collected by the VEMCO/Innovasea suite of acoustic transmitters and receivers.

As well as database and visualisation capabilities, VTrack provides functions to generate summary tables and to explore patterns in movement behaviour from tag detection and sensor data (e.g. residency time, movements between receivers/station connectivity, diving/surfacing, and basking/cooling events). This procedure condenses acoustic detection datasets by orders of magnitude, facilitating the synthesis of acoustic detection data.

In 2018 this was improved by the addition of the Animal Tracking Toolbox (ATT) to the VTrack package. ATT calculates standardised metrics of dispersal and activity space to enable direct comparisons between animals tracked within the same study and between studies or locations. For a comprehensive walk through of the ATT extension in VTrack, go through the examples on this page.


In this session we will briefly walk through how we can use the VTrack R package to quickly format and analyse our pigeye tracking dataset from Cape York.

For this session, we will use the same data you worked on in Session 2: the IMOS_pigeye_sample_dataset in the data folder you have downloaded.


Installing VTrack


The most recent version of the VTrack package is not currently available on CRAN, but it can be downloaded from GitHub.

## Load VTrack and other useful packages
library(VTrack)
library(tidyverse)
library(lubridate)
library(sf)
library(mapview)


Input, explore and format data from IMOS repository to use in VTrack

Let’s have a look at the detection, tag and receiver/station metadata in R using the tidyverse.

# First lets use the QC data from session 2
qc_data <- read_rds("data/IMOS_pigeye_sample_dataset/IMOS_qc_data.RDS")
# detections <- read_csv("data/IMOS_pigeye_sample_dataset/IMOS_detections.csv")

animal_measurements <- read_csv("data/IMOS_pigeye_sample_dataset/IMOS_animal_measurements.csv") %>% 
  filter(measurement_type=="TOTAL LENGTH")

tag_metadata <- 
  read_csv("data/IMOS_pigeye_sample_dataset/IMOS_transmitter_deployment_metadata.csv") %>% 
  left_join(animal_measurements)
      
station_info <- read_csv("data/IMOS_pigeye_sample_dataset/IMOS_receiver_deployment_metadata.csv")


We will then format it so that VTrack can read the column names correctly. The transmute function in the dplyr package is very useful for rearranging data into the format that you need for various packages.

# Get the detection dataset from IMOS into the right format for VTrack
detections <-
  qc_data %>% 
  left_join(tag_metadata,keep=FALSE) %>% 
  transmute(transmitter_id = transmitter_id,
            station_name = station_name,
            receiver_name = receiver_name,
            detection_timestamp = detection_datetime,
            longitude = receiver_deployment_longitude,
            latitude = receiver_deployment_latitude,
            sensor_value = (transmitter_sensor_slope*transmitter_sensor_raw_value+transmitter_sensor_intercept),
            sensor_unit = transmitter_sensor_unit)

# Get the transmitter deployment dataset from IMOS into the right format for VTrack
tag_metadata <-
  tag_metadata %>% 
  transmute(tag_id = transmitter_deployment_id,
            transmitter_id = transmitter_id,
            scientific_name = species_scientific_name,
            common_name = species_common_name,
            tag_project_name = tag_device_project_name,
            release_id = transmitter_deployment_id,
            release_latitude = transmitter_deployment_latitude,
            release_longitude = transmitter_deployment_longitude,
            ReleaseDate = transmitter_deployment_datetime,
            tag_expected_life_time_days = transmitter_estimated_battery_life,
            tag_status = transmitter_status,
            sex = animal_sex,
            measurement = measurement_value)

# Get the receiver station deployment dataset from IMOS into the right format for VTrack
station_info <-
  station_info %>% 
  transmute(station_name = station_name,
            receiver_name = receiver_name,
            installation_name = installation_name,
            project_name = receiver_project_name,
            deploymentdatetime_timestamp = receiver_deployment_datetime,
            recoverydatetime_timestamp = receiver_recovery_datetime,
            station_latitude = receiver_deployment_latitude,
            station_longitude = receiver_deployment_longitude,
            status = active)


We can now set up the data so that VTrack can read and analyse it properly.

input_data <- setupData(Tag.Detections = detections,
                        Tag.Metadata = tag_metadata,
                        Station.Information = station_info,
                        source = "IMOS")

summary(input_data)


The setup data is now a list containing all the components required for analyses. You can access each component separately by selecting it from the list.

# Detection information
input_data$Tag.Detections

# Tag information
input_data$Tag.Metadata

# Station deployment information
input_data$Station.Information


Examine patterns in detection and dispersal

Let's plot an abacus plot to show the detection patterns of our tagged pigeye sharks.

## plot an abacus plot of tag id
input_data$Tag.Detections %>% 
  mutate(date = date(Date.Time)) %>% 
  group_by(Transmitter, Station.Name, date) %>% 
  summarise(num_detections = n()) %>% 
  ggplot(aes(x = date, y = Transmitter, size = num_detections, color = Station.Name)) +
  geom_point() +
  labs(size = "Number of Detections", color = "Station Name") +
  theme_bw()


Next, plot another abacus plot, but this time showing the station names that tag id A69-9001-54409 was detected at.

## plot an abacus plot of tag id
input_data$Tag.Detections %>% 
  mutate(date = date(Date.Time)) %>% 
  group_by(Transmitter, Station.Name, date) %>% 
  summarise(num_detections = n()) %>% 
  filter(Transmitter=="A69-9001-54409") %>% 
  ggplot(aes(x = date, y = Station.Name, size = num_detections, color = Station.Name)) +
  geom_point() +
  labs(size = "Number of Detections", color = "Station Name") +
  theme_bw()


You can also map the data to explore spatial patterns.

## Map the data
input_data$Tag.Detections %>% 
  filter(Transmitter=="A69-9001-54409") %>% 
  group_by(Station.Name, Latitude, Longitude) %>% 
  summarise(num_detections = n()) %>% 
  st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326) %>% 
  mapview(cex = "num_detections", 
          zcol = "Station.Name")



Generating summary tables


We can now use the detectionSummary() and dispersalSummary() functions to calculate overall and monthly subsetted detection and dispersal metrics.

## Summarise detections patterns
det_sum <- detectionSummary(ATTdata = input_data, sub = "%Y-%m")

summary(det_sum)


Here we have set the sub parameter to %Y-%m (monthly subsets); weekly subsets can also be calculated using %Y-%W. The function calculates Overall metrics as well as Subsetted metrics, and you can access each by selecting the relevant component of the list output.

det_sum$Overall
det_sum$Subsetted
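These subset codes are standard strptime format codes, so you can preview the labels they produce using base R's format() (the date below is just an arbitrary example):

```r
d <- as.Date("2023-07-15")
format(d, "%Y-%m")  # monthly subset label: "2023-07"
format(d, "%Y-%W")  # weekly subset label (year, then week of year)
```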


Plotting monthly patterns in detection

We can then plot the results to have a look at monthly patterns in detection index between sexes of pigeye sharks tracked throughout the project.

# Calculate the mean and standard deviations across sex and month
monthly_detection_index <-
  det_sum$Subsetted %>% 
  mutate(date = lubridate::ymd(paste(subset, "01", sep = "-")),
         month = month(date, label = T, abbr = T)) %>% 
  group_by(Sex, month) %>% 
  summarise(mean_DI = mean(Detection.Index),
            se_DI = sd(Detection.Index)/sqrt(n()))

# Generate a plot using these data
monthly_detection_index %>% 
  ggplot(aes(x = month, y = mean_DI, group = Sex, color = Sex,
             ymin = mean_DI - se_DI, ymax = mean_DI + se_DI)) +
  geom_point() +
  geom_path() +
  geom_errorbar(width = 0.2) +
  labs(x = "Month of year", y = "Mean Detection Index") +
  theme_bw()


Similarly, we can use the dispersalSummary() function to run the same analysis and understand how the dispersal distances moved by individuals change over the year for each sex of pigeye shark.

## Summarise dispersal patterns
disp_sum <- dispersalSummary(ATTdata = input_data)

disp_sum

monthly_dispersal <-
  disp_sum %>% 
  mutate(Consecutive.Dispersal=units::drop_units(Consecutive.Dispersal)) %>% 
  drop_na(Sex,Consecutive.Dispersal) %>% 
  mutate(month = month(Date.Time, label = T, abbr = T)) %>% 
  group_by(Sex, month) %>% 
  summarise(mean_disp = mean(Consecutive.Dispersal),
            se_disp = sd(Consecutive.Dispersal)/sqrt(n())) %>%
  ungroup()

monthly_dispersal %>% 
  ggplot(aes(x = month, y = mean_disp, group = Sex, color = Sex,
             ymin = mean_disp - se_disp, ymax = mean_disp + se_disp)) +
  geom_point() +
  geom_path() +
  geom_errorbar(width = 0.2) +
  labs(x = "Month of year", y = "Mean Dispersal distance (m)") +
  theme_bw()



Generating animations in Google Earth Pro

Google Earth Pro offers a simple yet powerful way of visualising your acoustic tracking data through time. However, pulling detection datasets into Google Earth can be challenging given the size of many detection files. The VTrack R package has a few handy functions for visualising your tag detections as track animations in Google Earth.

For this to work, your receiver locations MUST be in the WGS84 coordinate reference system (CRS), and you will need to have Google Earth installed on your machine. If you have not already got it, Google Earth can be downloaded for free here
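If your station coordinates are in a projected CRS, the sf package (loaded earlier) can reproject them to WGS84. A minimal sketch using a made-up point in a projected CRS (EPSG:28355, GDA94 / MGA zone 55):

```r
library(sf)

# A hypothetical station location in a projected CRS (eastings/northings in metres)
pt <- st_sfc(st_point(c(500000, 8000000)), crs = 28355)

# Reproject to WGS84 (EPSG:4326) so coordinates become longitude/latitude
pt_wgs84 <- st_transform(pt, crs = 4326)
st_coordinates(pt_wgs84)
```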

First, let's convert our VTrack detection dataset into the shortened VTrack archive format.

# turn the VTrack ATT detection dataset into a VTrack archive using dplyr functions
Varchive <- input_data$Tag.Detections %>% 
  transmute(DATETIME=as.POSIXct(Date.Time),
         TRANSMITTERID=as.character(Transmitter),
         SENSOR1=Sensor.Value,
         UNITS1=as.character(Sensor.Unit),
         RECEIVERID=as.character(Receiver),
         STATIONNAME=as.character(Station.Name)) %>% 
  data.frame() # needs to be in this format to run the old VTrack functions

Varchive

Next, get the station information into the shortened VTrack format.

Vpoint_file <- input_data$Station.Information %>% 
  group_by(Station.Name) %>% 
  filter(Deployment.Date == max(Deployment.Date)) %>% # Extracts only the latest deployment location for each receiver
  distinct() %>% 
  ungroup() %>% 
  transmute(LOCATION=Station.Name,
            LATITUDE=Station.Latitude,
            LONGITUDE=Station.Longitude,
            RADIUS=0) %>% 
  data.frame() # needs to be in this format to run the old VTrack functions
  
Vpoint_file 


Once we have our dataset in the VTrack archive format and a separate data frame containing the receiver locations, we can then run VTrack’s KML creator functions. GenerateAnimationKMLFile_Track() generates a moving arrow for a single transmitter as it moves between the detection fields of adjacent receiver stations.

unique(Varchive$TRANSMITTERID) # Extracts the transmitter code
# [1] "A69-9001-48586" "A69-9001-48595" "A69-9001-48608" "A69-9001-48620" "A69-9001-48621" "A69-9001-54409"

# Run the function to generate the KML for a single transmitter
GenerateAnimationKMLFile_Track(Varchive, # VTrack archive file
                               "A69-9001-48586", # Transmitter code
                               data.frame(Vpoint_file), # points file
                               "images/A69-9001-48586.kml", # file name
                               "cc69deb3", # colour of the track
                               sLocation="STATIONNAME")


Generating residences and movements events from detection data

We can then use this within the RunResidenceExtraction() function in VTrack to extract the movements between hydrophone stations and link these to our measure of distance travelled.

Res <- RunResidenceExtraction(Varchive,  # Our data frame
                              "STATIONNAME",  # Whether we want to work with receiver serial numbers or station names
                              1, # the minimum number of detections at a receiver to record the presence of a tag
                              60*60*12, # the time period (in seconds) between detections before a residence event 'times out'
                              sDistanceMatrix = NULL) # our distance matrix containing distances between receiver locations

# Our residence file
head(Res$residences)

# Our log file
head(Res$residenceslog)

# Our movements file
Res$nonresidences %>%
  filter(STATIONNAME1 != STATIONNAME2) %>%
  head(3)
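The 'times out' logic above can be sketched in base R: consecutive detections belong to the same residence event until the gap since the previous detection exceeds the timeout (the detection times below are made up, in seconds):

```r
t <- c(0, 100, 200, 50000, 50100)  # hypothetical detection times (seconds)
timeout <- 60 * 60 * 12            # 12-hour timeout, as used above

gaps <- c(0, diff(t))              # time since the previous detection
event_id <- cumsum(gaps > timeout) # start a new event whenever a gap exceeds the timeout
event_id                           # 0 0 0 1 1 -> two residence events
```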


Now extract the number of movements made between each pair of stations for each tag ID…

# tally the number of movements between each station
Res$nonresidences %>%
  filter(STATIONNAME1 != STATIONNAME2) %>%
  group_by(TRANSMITTERID, STATIONNAME1, STATIONNAME2) %>% 
  count()

